Patent abstract:
The invention is a device allowing observation of a sample according to a first modality, by lensless imaging. This first modality makes it possible to obtain a first image, from which a region of interest of the sample can be identified. The device then makes it possible to analyze the region of interest using a second, more precise modality, in particular using an optical system.
Publication number: FR3057068A1
Application number: FR1659432
Filing date: 2016-09-30
Publication date: 2018-04-06
Inventors: Cedric ALLIER; Thomas Bordy; Olivier CIONI; Lionel Herve; Sophie MOREL
Applicant: Commissariat a l'Energie Atomique CEA; Commissariat a l'Energie Atomique et aux Energies Alternatives CEA
IPC main class:
Patent description:

Holder(s): COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES — Public establishment.
Extension request(s):
Agent(s): INNOVATION COMPETENCE GROUP.
DEVICE FOR OBSERVING A SAMPLE AND METHOD FOR OBSERVING A SAMPLE
Sample observation device and sample observation method
Description
TECHNICAL AREA
The technical field of the invention is microscopy, combining a conventional modality, using a magnifying optical objective, with a lensless imaging modality.
PRIOR ART
The observation of samples, and in particular biological samples, by lensless imaging has experienced significant development over the past ten years. This technique makes it possible to observe a sample by placing it between a light source and an image sensor, without any optical magnification lens between the sample and the image sensor. Thus, the image sensor collects an image of the light wave transmitted by the sample. This image, also called a hologram, is made up of interference figures between the light wave emitted by the light source and transmitted by the sample, and diffraction waves resulting from the diffraction, by the sample, of the light wave emitted by the light source. These interference figures are sometimes called diffraction figures, or designated by the English term "diffraction pattern".
The document WO2008090330 describes a device allowing the observation of samples comprising cells by lensless imaging. This document shows that lensless imaging can be used in microscopy, to count cells. It also shows that, based on a morphological analysis of the diffraction patterns, certain cells can be identified.
One of the advantages of lensless imaging is that it yields quality images while having a significantly larger field of observation than that of a microscope. However, the hologram does not allow reliable observation of cells, or of other diffusing elements of a sample, when their concentration increases. The hologram can then be processed by a holographic reconstruction algorithm, so as to obtain a reconstructed image representing a characteristic, for example the modulus or the phase, of the light wave passing through the sample and propagating towards the image sensor. This type of algorithm is well known in the field of holographic reconstruction. An example of a holographic reconstruction algorithm is described in the publication Ryle et al., "Digital in-line holography of biological specimens", Proc. of SPIE Vol. 6311 (2006). However, such an algorithm can give rise to the appearance of reconstruction noise, designated by the term "twin image", on the reconstructed image.
Application US2012/0218379 describes a method for reconstructing a complex image of a sample, the latter comprising amplitude and phase information, while limiting the twin image. Application US2012/0148141 applies the method described in US2012/0218379 to reconstruct a complex image of spermatozoa and to characterize their mobility. This latter application describes a so-called tracking algorithm, which makes it possible to follow the trajectory of the spermatozoa.
The inventors propose a device, as well as an observation method, making it possible to combine the large field of observation conferred by lensless imaging with a finer analysis by a more precise observation modality.
STATEMENT OF THE INVENTION
A first object of the invention is a device for the observation of a sample comprising: a support, intended to hold the sample;
a first light source, capable of emitting an incident light wave propagating to the sample;
a first image sensor, capable of acquiring a first image of the sample illuminated by the incident light wave, the support being configured to hold the sample between the first light source and the first image sensor, such that no magnification optics are arranged between the sample and the first image sensor, the first image sensor being exposed to a light wave called the exposure light wave, the first image defining a first field of observation of the sample;
the device also comprising:
a second image sensor, optically coupled to an optical system having a magnification greater than 1, so as to acquire a second image of the sample, maintained on the support, according to a second field of observation reduced relative to the first field of observation.
The optical system may in particular be an objective, for example a microscope objective, of magnification greater than 5, or even greater than 10.
The device may include a second light source, capable of illuminating the sample during the acquisition of the second image of the sample. The second light source may be one and the same as the first light source.
The device may include a mechanism for relative displacement of the sample relative to the first image sensor and to the optical system, so as to alternate between:
a first modality, according to which the sample is placed in the field of observation of the first image sensor, so as to acquire the first image; a second modality, according to which the sample is placed in the field of observation of the second image sensor, so as to acquire the second image.
The movement mechanism can be a stage able to translate or to rotate.
The device can include:
a selector, able to allow the selection of a region of interest in the first image, or from the first image. The selector can be operated manually. It can in particular be a computer peripheral of the mouse or keyboard type.
a processor, configured to determine a relative position of the sample relative to the optical system according to which the selected region of interest extends in the second field of observation;
such that the movement mechanism is configured to automatically position the sample relative to the optical system according to said relative position determined by the processor. Thus, the region of interest of the sample can be observed by the second image sensor, through the optical system.
The device may include a processor configured to apply a digital propagation operator to the first image, so as to:
calculate a complex expression of the exposure light wave along a reconstruction surface, in particular a reconstruction plane, extending facing the first image sensor, defining a complex image;
form an image, called the reconstructed image, from the modulus and/or the phase of said complex expression, so that, in the second modality, the position of the sample relative to the optical system is defined as a function of a region of interest selected from the reconstructed image.
By reconstructed image is meant an image representing the modulus of the exposure light wave, or the phase of the exposure light wave, or a combination thereof, the reconstructed image being formed from the complex image obtained by applying the digital propagation operator to the first image.
According to one embodiment, the first image sensor extends along a detection plane and the device includes a processor configured to apply a digital focusing (or digital autofocus) algorithm to the first image, so as to estimate a distance between the sample and the detection plane at the region of interest, so that the relative position of the sample relative to the optical system is determined as a function of the distance thus estimated.
According to one embodiment, the first image sensor and the second image sensor are fixed, and the movement mechanism is able to move the sample:
facing the first image sensor in the first modality; and/or facing the optical system in the second modality.
According to one embodiment, the sample is fixed, and the movement mechanism is able to: move the first image sensor to bring it facing the sample, in the first modality;
and/or move the optical system, and possibly the second image sensor, to bring it facing the sample, in the second modality.
Another object of the invention is a method of observing a sample comprising the following steps:
a) illumination of the sample using a first light source;
b) acquisition of an image of the sample, called the first image, using a first image sensor, the first image sensor being exposed to a light wave called the exposure light wave, the sample being held between the first light source and the first image sensor, no magnification optics being arranged between the first image sensor and the sample;
c) selection of a region of interest of the sample from the first image.
According to one embodiment, the method also includes the following steps:
d) relative displacement of the sample with respect to an optical system, in particular an objective, having a magnification greater than 1, the optical system being optically coupled to a second image sensor, the displacement being effected automatically by a displacement mechanism, so that the region of interest of the sample is located in a field of observation, called the second field of observation, of the second image sensor;
e) illumination of the sample using a second light source and acquisition of an image of the region of interest of the sample, called the second image, using the second image sensor.
The second light source may be one and the same as the first light source. The first image sensor can extend along a detection plane. The relative movement of the sample can in particular make it possible to switch automatically between:
a first mode, according to which the sample is placed in an observation field of the first image sensor, called the first observation field, so as to acquire the first image;
a second modality, according to which the sample is placed in the field of observation of the second image sensor, so as to acquire the second image.
According to one embodiment, during step c), the region of interest is selected on the first image, using a manual selector, for example a computer mouse or a keyboard, or by an analysis carried out from the first image, the analysis being based on a previously defined selection criterion and implemented by a processor.
According to one embodiment, step c) comprises the following sub-steps:
ci) application of a propagation operator to the first image, so as to calculate a complex expression of the exposure light wave along a reconstruction surface extending facing the detection plane, defining a complex image;
cii) from the calculated complex image, formation of an image, called the reconstructed image, as a function of the modulus and/or the phase of the complex expression;
ciii) selection of the area of interest from the reconstructed image.
By "from the first image" is meant either directly from the first image, or from an image obtained from the first image, for example after truncation or normalization or the application of a filter.
During sub-step ciii), the region of interest can be determined on the reconstructed image, using a manual selector or by an analysis of the reconstructed image, the analysis being based on a previously defined selection criterion and implemented by a processor.
During sub-step ci), the reconstruction surface can be a plane; it may in particular be a plane of the sample, along which the sample extends.
According to one embodiment, during step ci), the propagation operator is applied to a so-called intermediate image, obtained by applying an operator to the first image so as to cover a field of observation similar to that of the first image, while comprising a number of pixels smaller than the number of pixels of the first image. The method can then include the following steps:
civ) application of a propagation operator to the first image, in the region of interest selected in sub-step ciii), so as to calculate a complex expression of the exposure light wave along a reconstruction surface, extending facing the detection plane, and in particular along the plane of the sample, defining a complex image of interest;
cv) from the calculated complex image of interest, formation of an image, called the reconstructed image of interest, as a function of the modulus and/or the phase of the complex expression;
cvi) display of the reconstructed image of interest.
The number of pixels in the intermediate image can be at least 2 times, or even at least 10 times, smaller than the number of pixels in the first image.
According to one embodiment, the method comprises, prior to step c), a step of calibrating a position of the sample relative to the detection plane, the calibration step comprising the following substeps:
i) selection of a plurality of calibration points on the first acquired image;
ii) determination of an elementary calibration zone around each selected calibration point;
iii) implementation, by a processor, of a digital focusing algorithm, so as to estimate a distance, called calibration distance, between the sample and the detection plane, at the level of each elementary calibration zone;
iv) partitioning of the first acquired image into different elementary images, and association, with each elementary image, of a distance between the sample and the detection plane, as a function of the calibration distance determined for each elementary calibration zone; such that sub-step ci) comprises an application of a propagation operator to each elementary image, according to the distance associated with said elementary image, so as to calculate, for each elementary image, a complex expression of the exposure light wave along an elementary reconstruction plane;
step cii) then comprises the formation of an elementary reconstructed image from the modulus or the phase of the complex expression calculated during sub-step ci) along each elementary reconstruction plane, the reconstructed image being obtained by concatenation of the elementary reconstructed images.
The reconstructed image can be used to select a region of interest from the sample. According to this embodiment, the digital focusing algorithm can include the following steps:
an application of a digital propagation operator to each elementary calibration zone, in order to obtain, for each of them, a complex image, called a calibration image, of the exposure light wave along different reconstruction planes respectively spaced at different distances from the detection plane;
for each elementary calibration zone, a determination, for each reconstruction plane, of a sharpness indicator of a reconstruction image obtained as a function of the phase or the modulus of the complex calibration expression calculated in said reconstruction plane;
a determination of a calibration distance between the sample and the detection plane at each elementary calibration zone, as a function of the calculated sharpness indicators.
According to one embodiment, the method comprises, following step c) or during step d), the following sub-steps:
di) implementation, by a processor, of a digital focusing algorithm, so as to estimate a distance between the sample and a detection plane along which the first image sensor extends, at the level of the region of interest selected during step c);
dii) relative displacement of the sample relative to the optical system, taking into account the distance thus estimated, so that the sample is placed in a focal plane of the optical system.
The digital focusing algorithm can include:
the application of a digital propagation operator to the first image, so as to calculate a complex expression of the exposure light wave along a plurality of reconstruction planes respectively located at different reconstruction distances from the detection plane;
obtaining a reconstruction image at each reconstruction distance, from the phase or the amplitude of the complex expression determined along each reconstruction plane;
determination of a sharpness indicator for each reconstruction image; determination of the distance between the sample and the detection plane, at the level of the region of interest, as a function of the sharpness indicators determined on the reconstruction images.
The method can be implemented with a device as described in this description.
An object of the invention is also a method of observing a sample comprising the following steps:
1) illumination of the sample using a first light source;
2) acquisition of an image of the sample, called the first image, using a first image sensor, the sample being held between the first light source and the first image sensor, no magnification optics being disposed between the first image sensor and the sample;
3) obtaining an image, called the reconstructed image, of the sample, step 3) comprising the following sub-steps:
applying a propagation operator to the first image, so as to calculate a complex expression of the exposure light wave along a reconstruction surface extending facing the detection plane, defining a complex image;
from the complex image thus calculated, formation of the reconstructed image, as a function of the modulus and/or the phase of the complex expression.
According to one embodiment, the method comprises, before step 3), a step of calibrating a position of the sample relative to the detection plane, the calibration step comprising the following substeps:
i) selection of a plurality of calibration points on the first acquired image;
ii) determination of an elementary calibration zone around each selected calibration point;
iii) implementation, by a processor, of a digital focusing algorithm, so as to estimate a distance, called calibration distance, between the sample and the detection plane, at the level of each elementary calibration zone;
iv) partitioning of the first acquired image into different elementary images, and association, with each elementary image, of a distance between the sample and the detection plane, as a function of the calibration distance determined for each elementary calibration zone; such that step 3) includes an application of a propagation operator to each elementary image, according to the distance associated with said elementary image, so as to calculate, for each elementary image, a complex expression of the exposure light wave along an elementary reconstruction plane;
step 3) also includes the formation of an elementary reconstructed image from the modulus or the phase of the complex expression thus calculated along each elementary reconstruction plane, the reconstructed image being obtained by a combination, for example a concatenation, of the elementary reconstructed images.
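By way of illustration, a minimal sketch, in Python, of this partition-based reconstruction, assuming hypothetical helpers reconstruct(img, z) (holographic propagation, see the propagation sketch given later in this description) and focus_search(img) (digital focusing, see the autofocus sketch at the end of this description); here each elementary image is focused directly, whereas the method described above may interpolate the distances from a few calibration zones.

```python
import numpy as np

def tilt_corrected_reconstruction(I1, tile, reconstruct, focus_search):
    """Steps i) to iv): partition the hologram into elementary images,
    estimate a calibration distance per tile by digital focusing,
    reconstruct each tile at its own distance, then concatenate."""
    ny, nx = I1.shape
    rows = []
    for r in range(0, ny - ny % tile, tile):
        row = []
        for c in range(0, nx - nx % tile, tile):
            elem = I1[r:r + tile, c:c + tile]             # elementary image
            z_cal = focus_search(elem)                    # calibration distance
            row.append(np.abs(reconstruct(elem, z_cal)))  # elementary reconstruction
        rows.append(np.hstack(row))
    return np.vstack(rows)  # concatenation into the final reconstructed image
```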
The digital focusing algorithm can be as described in this description.
Other advantages and characteristics will emerge more clearly from the description which follows of particular embodiments of the invention, given by way of nonlimiting examples, and represented in the figures listed below.
FIGURES
FIG. 1A represents an embodiment of a device according to the invention, configured according to a first mode of observation of a sample. FIG. 1B represents the device of FIG. 1A configured according to a second mode of observation of the sample. FIG. 1C shows an example of a first light source able to equip a device according to the invention. FIG. 1D shows another embodiment of a device according to the invention. FIG. 1E shows a rotatable mobile stage of a device according to the invention.
FIG. 2A represents the main steps of a method for observing a sample according to a first embodiment. FIG. 2B represents the main steps of a method for observing a sample according to a second embodiment. FIG. 2C shows the main steps making up step 120 described in connection with FIG. 2B. FIG. 2D is an image of a sample, comprising cells, obtained according to the first observation mode. FIGS. 2E and 2F are images, obtained according to the second observation mode, of regions of interest selected in the image of FIG. 2D. FIG. 2G is an image of a sample, comprising a tissue slide, obtained according to the first observation mode. FIG. 2H is an image, obtained according to the second observation mode, of regions of interest selected in the image of FIG. 2G.
FIG. 3A is an image of a sample, called the reconstructed image, obtained according to a variant of the first observation mode. FIGS. 3B and 3C are images, obtained according to a variant of the first observation mode, of regions of interest selected in the image of FIG. 3A.
FIG. 4A is an image, known as a reconstructed image, of a sample comprising dividing cells, the latter being the subject of a region of interest, materialized by a light frame. FIG. 4B is an image, known as a reconstructed image, of a sample comprising white blood cells, the latter being the subject of a region of interest, materialized by a dark frame. FIG. 4C is an image, known as a reconstructed image, of a sample comprising infected cells, the latter being the subject of a region of interest, materialized by a dark frame. FIG. 5 represents the steps of an embodiment.
FIG. 6A shows a sample tilted relative to an image sensor. FIG. 6B shows the main steps of a method making it possible to take into account the tilt shown schematically in FIG. 6A, so as to obtain so-called reconstructed images corrected for this tilt. FIGS. 6C and 6D illustrate steps shown in FIG. 6B.
FIG. 7A is an image of the sample obtained according to the first mode, the sample being tilted relative to an image sensor. FIGS. 7B, 7C and 7D are images reconstructed on the basis of FIG. 7A, without taking the tilt into account. FIGS. 7E, 7F and 7G are images reconstructed on the basis of FIG. 7A, taking the tilt into account.
FIG. 8 represents a view of a screen of a device according to the invention.
EXPLANATION OF PARTICULAR EMBODIMENTS
FIGS. 1A and 1B show an example of a bimodal microscopy device according to the invention. FIG. 1A represents the device according to a lensless imaging modality, while FIG. 1B shows the device according to a conventional microscopy modality. The device comprises a first light source 11, capable of emitting a first light wave 12 propagating towards a sample 10, along a propagation axis Z, in a spectral emission band Δλ.
The sample 10 is placed on a sample holder 10s. The sample can be a medium, for example a liquid medium, in which particles are immersed, or on the surface of which particles are arranged. It may for example be a biological or body fluid. By particle is meant, for example, objects whose diameter is less than 1 mm, or even 100 μm, or objects inscribed in a circle of such a diameter. The particles can be cells or microorganisms, for example bacteria, spores, or microbeads. The medium can also be an agar, suitable for the development of bacterial colonies, or a solid. Sample 10 can also be a tissue slide intended for histological analysis, or anatomopathology slide, comprising a thin thickness of tissue deposited on a transparent slide. By thin thickness is meant a thickness preferably less than 100 μm, and preferably less than 10 μm, typically a few micrometers. Such a tissue slide can be obtained according to known preparation methods, from a tissue sample taken by biopsy or smear, then prepared so as to be in the form of a thin thickness deposited on a transparent slide, the latter serving as a support. Such methods are known in the field of histology. They include, for example, the sectioning of frozen tissue, or the embedding of a tissue sample in a paraffin matrix. The tissue can then be stained, for example using a staining agent of the HES (Hematoxylin-Eosin-Saffron) type.
Generally, the thickness of the sample 10, along the propagation axis Z, is preferably between 20 μm and 500 μm. The sample extends along a plane P10, called the plane of the sample, preferably perpendicular to the propagation axis Z. It is maintained on the support 10s at a distance d from a first image sensor 16.
Preferably, the optical path traveled by the first light wave 12 before reaching the sample 10 is greater than 5 cm. Advantageously, the light source, as seen by the sample, is considered point-like. This means that its diameter (or its diagonal) is preferably less than a tenth, better a hundredth, of the optical path between the sample and the light source. The light source 11 can be, for example, a light-emitting diode or a laser source, for example a laser diode. It can be associated with a diaphragm 18, or spatial filter. The opening of the diaphragm 18 is typically between 5 μm and 1 mm, preferably between 50 μm and 500 μm. In this example, the diaphragm is supplied by Thorlabs under the reference P150S and its diameter is 150 μm. The diaphragm can be replaced by an optical fiber, a first end of which is placed facing the light source 11 and a second end of which is placed facing the sample 10.
The device may include a diffuser 17, disposed between the light source 11 and the diaphragm 18. The use of such a diffuser makes it possible to overcome constraints on the centering of the light source 11 relative to the opening of the diaphragm 18. The function of such a diffuser is to distribute the light beam produced by the light source over a cone of angle α. Preferably, the diffusion angle α varies between 10° and 80°. The presence of such a diffuser makes the device more tolerant of an off-centering of the light source relative to the diaphragm. The diaphragm is not necessary, in particular when the light source is sufficiently point-like, in particular when it is a laser source.
Preferably, the emission spectral band Δλ of the incident light wave 12 has a width less than 100 nm. By spectral bandwidth is meant the full width at half maximum of said spectral band.
The device, as shown in FIG. 1A, comprises a prism 15, capable of reflecting the first incident light wave 12 towards the sample 10. The use of such a prism makes it possible to keep the light sources stationary relative to the sample. Such a prism is optional.
The first image sensor 16 is able to form a first image I1 in a detection plane P0. In the example shown, it is an image sensor comprising a pixel matrix, of the CCD or CMOS type, the surface of which is generally greater than 10 mm². The area of the pixel matrix, called the detection area, depends on the number of pixels and their size. It is generally between 10 mm² and 50 mm². The detection plane P0 preferably extends perpendicular to the propagation axis Z of the incident light wave 12. The distance d between the sample 10 and the pixel matrix of the image sensor 16 is preferably between 50 μm and 2 cm, preferably between 100 μm and 2 mm.
Note the absence of magnification optics between the first image sensor 16 and the sample 10. This does not prevent the possible presence of focusing microlenses at each pixel of the first image sensor 16, the latter having no function of magnifying the image acquired by the first image sensor.
Due to the proximity between the first image sensor 16 and the sample 10, the first image I1 is acquired according to a first field of observation Ω1 slightly smaller than the area of the image sensor, that is to say typically between 10 mm² and 50 mm². This is a large field of observation compared with the field of observation conferred by a high-magnification microscope objective, for example an objective with a magnification greater than 10. Thus, the first image I1 makes it possible to obtain exploitable information on the sample over a large first field of observation Ω1. An important element of the invention is to take advantage of this large field of observation in order to select a region of interest ROI of the sample on or from the first image I1, and then to analyze the selected region of interest with a conventional microscope objective 25 having a magnification greater than 1, or even greater than 10.
Under the effect of the first incident light wave 12, the sample can generate a diffracted wave 13, capable of producing, at the level of the detection plane P0, interference with a part of the first incident light wave 12 transmitted by the sample. Thus, the light wave 14, called the exposure light wave, transmitted by the sample 10 and to which the first image sensor 16 is exposed, can comprise:
a component 13 resulting from the diffraction of the first incident light wave 12 by the sample;
a component 12 ', transmitted by the sample, and resulting from the absorption of the first incident light wave 12 by the sample.
These components form interference patterns in the detection plane. Thus, the first image I1 acquired by the image sensor includes interference patterns (or diffraction patterns), each interference pattern being generated by the sample. For example, when the sample contains particles, an interference pattern can be associated with each particle. The first image then makes it possible to locate the particles, to count them, or even to identify a particle based on the morphology of its associated diffraction figure, as described for example in WO2008090330. A region of interest can then be selected on the first image I1, before carrying out a more in-depth analysis of the region of interest using the second modality described below, in connection with FIG. 1B.
A processor 40, for example a microprocessor, is configured to process each image I1 acquired by the image sensor 16, and to allow, for example, the selection of the region of interest ROI as well as any holographic reconstruction or image processing operations described in this application. In particular, the processor is a microprocessor connected to a programmable memory 42 in which a sequence of instructions is stored for performing the image processing and calculation operations described in this description. The processor can be coupled to a screen 44 allowing the display of images acquired by the image sensor 16 or calculated by the processor 40.
As previously described, the first image I1 may be sufficient to locate a region of interest ROI which it seems advisable to analyze in more depth. This is for example the case when the sample 10 comprises particles, the latter being able to be the subject of a more detailed analysis by implementing the second modality described below.
The device represented in FIG. 1A comprises a second light source 21, as well as an optical system 25 with a magnification greater than 1. The second light source 21 emits a second incident light wave 22 propagating towards the sample. A second image sensor 26 is coupled to the optical system 25, the second sensor 26 being arranged in the image focal plane of the magnifying optical system 25. The second sensor 26 makes it possible to obtain detailed information on the selected region of interest ROI of the sample, according to a second field of observation Ω2 that is reduced with respect to the first field of observation Ω1. The first light source 11 and the second light source 21 can be placed facing the sample 10, in which case the prism 15 is not useful.
The sample can be moved relative to the first image sensor 16 and the optical system 25, so as to be arranged:
either according to a first wide-field observation modality, using the first image sensor 16, as previously described, this first modality being shown in FIG. 1A;
or according to a second, magnified observation modality, using the second image sensor 26. In this second modality, the sample is placed in the object focal plane of the optical system 25. In other words, in the second modality, the sample is placed with respect to the second image sensor 26 so that the latter can acquire a sharp second image I2 of the sample 10 through the optical system 25. The second modality is shown in FIG. 1B.
Preferably, the sample 10 is kept stationary, while the image sensor 16 and the optical system 25 are moved relative to the sample between the two observation modalities. FIGS. 1A and 1B show a mobile stage 30, supporting the first image sensor 16 and the optical system 25, and allowing their displacement relative to the sample 10. Alternatively, the sample is mounted on a mobile support 10s, making it possible to move it either facing the first image sensor 16 or facing the optical system 25. As illustrated in FIG. 1B, the mobile stage can allow movement parallel to the propagation axis Z of the light, or in an XY plane perpendicular to this axis. The processor 40 can control the mobile stage 30, so as to determine, in each modality, a relative position of the sample 10 with respect to the first image sensor 16 or with respect to the optical system 25.
The optical system 25 is in particular a microscope objective, the magnification of which is preferably greater than or equal to 5, or even 10. The device can comprise several optical systems 25, 25' with different magnifications. In the second modality, the sample is illuminated by the second light source 21, for example a white light source. The second light source 21 is not necessarily different from the first light source 11. The second image I2, acquired by the second image sensor through the objective 25, makes it possible to obtain a detailed representation of the region of interest ROI identified in the first image I1.
Preferably, the relative displacement of the first image sensor 16 and of the optical system 25 is calculated automatically by the processor 40, as a function of the region of interest ROI of the sample selected by an operator from the first image I1. For example, the first image I1 can be displayed on the screen 44. The operator then selects the region of interest ROI using a selector 41, the latter being notably a peripheral accessory of the processor 40, of the computer mouse or keyboard type. The selection of the region of interest leads to actuation of the stage 30 to place the sample according to the second analysis modality, that is to say facing the objective 25. The selector allows manual selection of the region of interest ROI of the sample, but an automatic selection can be carried out by a processor, for example the processor 40, as described later in connection with FIGS. 4A to 4C.
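As an illustration, a minimal sketch of this positioning calculation, assuming the lensless modality images the sample at roughly unit magnification (the sample being close to the sensor) and that the lateral offset between the axis of the first sensor 16 and the optical axis of the objective 25 is a known calibration constant of the device; all names are hypothetical.

```python
def stage_target(roi, pitch, axis_offset):
    """Map a region of interest selected on I1 (pixel indices r0, r1, c0, c1)
    to the stage position bringing it under the objective 25."""
    r0, r1, c0, c1 = roi
    # in the lensless modality one pixel of I1 covers about one pixel pitch
    # of the sample plane, so pixel indices convert directly to distances
    x = (c0 + c1) / 2 * pitch + axis_offset[0]
    y = (r0 + r1) / 2 * pitch + axis_offset[1]
    return x, y  # target coordinates for the stage 30 in the XY plane
```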
It will be understood that the combination of these two modalities saves time during the analysis of the sample, by concentrating the fine analysis, obtained with the objective 25, on a limited number of regions of interest, the latter being determined from the large-field image I1 acquired by the first modality. This avoids an unnecessary and time-consuming scan of the entire surface of a sample using a microscope objective.
FIG. 1C represents an embodiment according to which the first light source 11 comprises three elementary light sources 11₁, 11₂ and 11₃, emitting respectively in a first spectral band Δλ1 = 450 nm - 465 nm, a second spectral band Δλ2 = 520 nm - 535 nm and a third spectral band Δλ3 = 620 nm - 630 nm. These three elementary light sources are here light-emitting diodes. In this example, the light source is a light-emitting diode supplied by CREE under the reference Xlamp MCE. The three elementary light-emitting diodes 11₁, 11₂ and 11₃ composing it are activated simultaneously. Alternatively, these light-emitting diodes can be activated successively. With such a light source, the diffuser 17 is particularly useful, because it allows a certain off-centering of one or more elementary light sources.
The first image sensor 16 can comprise a Bayer filter, so that each pixel is sensitive to a spectral band chosen from blue, red or green. Thus, when the sample 10 is exposed to such a first light source 11, the first image sensor 16 acquires a first image I1 which can be broken down into:
a first image I1(Δλ1) in the first spectral band Δλ1 of emission of the first light-emitting diode 11₁, this image being formed from the pixels exposed to a wavelength transmitted by the blue filter of the Bayer filter; a second image I1(Δλ2) in the second spectral band Δλ2 of emission of the second light-emitting diode 11₂, this image being formed from the pixels exposed to a wavelength transmitted by the green filter of the Bayer filter; a third image I1(Δλ3) in the third spectral band Δλ3 of emission of the third light-emitting diode 11₃, this image being formed from the pixels exposed to a wavelength transmitted by the red filter of the Bayer filter.
In general, according to this embodiment, the image sensor 16 allows the acquisition of first images I1(Δλi) of the sample 10 in different spectral bands Δλi. Each first image I1(Δλi) is representative of a light wave 14i, to which the first image sensor 16 is exposed, in each spectral band Δλi. Preferably, there is no overlap between the different spectral bands; a negligible overlap, for example concerning less than 25%, better still less than 10%, of the emitted light intensity, is however conceivable.
Other configurations are possible, for example the use of a monochrome image sensor, acquiring a first image I1(Δλi) of the sample when the latter is successively illuminated by an incident wave 12i, in different spectral bands Δλi. Each incident wave 12i can be emitted by a light source 11i emitting in one of said spectral bands, or by a white light source filtered by an optical filter whose passband corresponds to said spectral band Δλi.
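A minimal sketch of the decomposition of a raw Bayer frame into the three first images I1(Δλi), assuming an RGGB filter layout; the actual layout depends on the sensor used.

```python
import numpy as np

def split_bayer(raw):
    """Split a raw Bayer frame (RGGB layout assumed) into the three first
    images I1(dl1) (blue), I1(dl2) (green) and I1(dl3) (red)."""
    r = raw[0::2, 0::2].astype(float)    # red sites
    g1 = raw[0::2, 1::2].astype(float)   # first green site of each cell
    g2 = raw[1::2, 0::2].astype(float)   # second green site of each cell
    b = raw[1::2, 1::2].astype(float)    # blue sites
    return b, (g1 + g2) / 2, r           # one image per emission band
```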
FIG. 1D represents an embodiment in which the first light source 11 is a laser source, for example a laser diode. In such a configuration, the diffuser 17 and the spatial filter 18 are not necessary.
FIG. 1E shows an embodiment in which the stage 30 takes the form of a turret, on which are fixed the first image sensor 16, supported by a sensor support 16s, as well as two objectives 25, 25' having different magnifications. The turret is able to rotate so as to place the first sensor 16 or one of the objectives facing the sample 10.
FIG. 2A represents the main steps of a method for observing a sample as previously described. These steps are:
Step 100: illumination of the sample 10 using the first light source 11, the sample being placed facing the first image sensor 16.
Step 110: acquisition of a first image I1 of the sample 10 using the first image sensor 16.
Step 130: selection, manual or automatic, of a region of interest ROI on the first image I1.
Step 140: relative movement of the sample 10 with respect to the objective 25, so as to place the region of interest ROI of the sample 10 facing this objective.
Step 150: illumination of the sample 10 using the second light source 21.
Step 160: acquisition of a second image I 2 representing the area of interest ROI using the second image sensor 26, through the objective 25.
Step 170: end of the algorithm, or relative displacement of the sample with respect to the first image sensor 16, so as to place the sample facing the first image sensor 16.
When the number of diffracting elements in the sample increases, the first image I1 acquired by the first image sensor 16 may not allow a reliable selection of the region of interest. This may be the case when the sample contains particles and the concentration of the particles is high. This is also the case when the sample is a thin tissue slide as previously described. In this case, the region of interest is not selected on the first image I1, but on an image Iz, called the reconstructed image, obtained from the first image. Such an embodiment is shown in FIG. 2B. Steps 100, 110, 140, 150, 160 and 170 are identical to those described in connection with FIG. 2A. The method includes the following steps:
Step 120: determination of an image Iz, called the reconstructed image, representative of the sample. This image is obtained by applying a holographic propagation operator h, as described below, to the first image I1, so as to calculate a complex image Az representing the complex amplitude of the exposure light wave 14 along a surface extending substantially parallel to the detection plane P0, at a distance z, called the reconstruction distance, from the latter. The reconstructed image Iz is obtained from the modulus and/or the phase of the complex amplitude Az thus calculated. By substantially parallel is meant parallel, an angular tolerance of plus or minus 10° or 20° being allowed.
Step 130: selection, manual or automatic, of a region of interest ROI on the reconstructed image Iz.
In this description, the term reconstructed image designates an image Iz formed from the modulus or the phase of the exposure light wave 14 along a reconstruction surface parallel to the detection plane. The reconstructed image is determined from the modulus or the phase of the complex image Az. This surface can be a plane Pz, located at a reconstruction distance z from the detection plane P0. It can also be several planes, parallel to the detection plane and located at different distances from the detection plane, so as to take into account an inclination of the plane P10, along which the sample 10 extends, relative to the detection plane P0.
The reconstructed image Iz is obtained by applying a holographic propagation operator h to the first image I1 acquired by the first image sensor 16. Such a method, designated by the term holographic reconstruction, makes it possible in particular to reconstruct an image of the modulus or of the phase of the exposure light wave 14 in a reconstruction plane Pz parallel to the detection plane P0, and in particular in the plane P10 along which the sample extends. For this, a convolution product of the first image I1 by a propagation operator h is carried out. It is then possible to reconstruct a complex expression A of the light wave 14 at any point of coordinates (x, y, z) of space, and in particular in a reconstruction plane Pz located at a distance |z| from the image sensor 16, called the reconstruction distance, this reconstruction plane preferably being the plane of the sample P10, with:

A(x, y, z) = I1(x, y, z) * h,

* designating the convolution product operator. In the remainder of this description, the coordinates (x, y) designate a radial position in a plane perpendicular to the propagation axis Z. The coordinate z designates a coordinate along the propagation axis Z. The complex expression A is a complex quantity whose argument and modulus are respectively representative of the phase and the intensity of the exposure light wave 14. The convolution product of the first image I1 by the propagation operator h makes it possible to obtain a complex image Az representing a spatial distribution of the complex expression A in a reconstruction plane Pz, extending at a coordinate z from the detection plane P0. In this example, the detection plane P0 has the equation z = 0. The complex image Az corresponds to a complex image of the exposure wave 14 in the reconstruction plane Pz. It also represents a two-dimensional spatial distribution of the optical properties of the exposure wave 14.
The function of the propagation operator h is to describe the propagation of light between the image sensor 16 and a point of coordinates (x, y, z), located at a distance |z| from the first image sensor. It is then possible to determine the modulus M(x, y, z) and/or the phase φ(x, y, z) of the light wave 14 at this distance |z|, called the reconstruction distance, with:
M(x, y, z) = abs[A(x, y, z)];
φ(x, y, z) = arg[A(x, y, z)];
where abs and arg respectively designate the modulus and the argument.
The propagation operator is for example the Fresnel-Helmholtz function, such that:

h(x, y, z) = (1 / (jλz)) e^(j2πz/λ) exp(jπ (x² + y²) / (λz)).
In other words, the complex expression A of the light wave 14, at any point of coordinates (x, y, z) of space, is such that A(x, y, z) = M(x, y, z) e^(jφ(x, y, z)). It is then possible to form images, called reconstructed images, Mz and φz, representing respectively the modulus or the phase of the complex expression A in a plane Pz located at a distance |z| from the detection plane P0, with Mz = mod(Az) and φz = arg(Az).
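By way of illustration, a minimal numerical sketch of this reconstruction, in Python; the pixel pitch, wavelength and sign convention for z are illustrative assumptions, not values prescribed by the patent, and a monochromatic illumination with square pixels is assumed.

```python
import numpy as np

def fresnel_kernel(shape, pitch, wavelength, z):
    """Fresnel-Helmholtz impulse response h(x, y, z), sampled on the sensor grid."""
    ny, nx = shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    return (1 / (1j * wavelength * z)) * np.exp(2j * np.pi * z / wavelength) \
        * np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

def propagate(field, pitch, wavelength, z):
    """Convolution of a field recorded in P0 with h, giving the complex image Az in Pz."""
    h = fresnel_kernel(field.shape, pitch, wavelength, z)
    # FFT-based convolution; ifftshift recentres the kernel on the (0, 0) pixel
    return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(np.fft.ifftshift(h)))

# Example (hypothetical values): reconstruction near the sample plane, d = 1.5 mm
# Az = propagate(I1, pitch=1.67e-6, wavelength=530e-9, z=-1.5e-3)
# Mz, phi_z = np.abs(Az), np.angle(Az)   # modulus and phase images
```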
However, a simple application of the propagation operator h to the first image generally leads to a complex image Az affected by significant reconstruction noise. This is due to the fact that the first image I1, acquired by the image sensor 16, does not include information as to the phase of the exposure light wave 14. It is then possible to implement iterative algorithms, so as to progressively estimate the phase of the exposure light wave 14 in the detection plane P0, which then makes it possible to obtain a more accurate complex image Az of the light wave 14 in a reconstruction plane Pz.
The inventors have developed an iterative algorithm, described in the publication S. N. A. Morel, A. Delon, P. Blandin, T. Bordy, O. Cioni, L. Hervé, C. Fromentin, J. Dinten, and C. Allier, "Wide-Field Lensfree Imaging of Tissue Slides", in Advanced Microscopy Techniques IV; and Neurophotonics II, E. Beaurepaire, P. So, F. Pavone, and E. Hillman, eds., Vol. 9536 of SPIE Proceedings (Optical Society of America, 2015), as well as in patent application FR1554811 filed on May 28, 2015, and more specifically in steps 100 to 500 described in that application. According to this algorithm, the sample is illuminated successively or simultaneously in different spectral bands Δλi, using a light source 11 as described in FIG. 1C. A first image I1(Δλi) is acquired in the detection plane P0 in each spectral band. The algorithm makes it possible to obtain a complex image Az(Δλi) of the light wave 14, in a reconstruction plane Pz, in each spectral band Δλi. The complex images Az(Δλi) thus obtained, in each spectral band Δλi, can be combined, for example by averaging, in each pixel, their modulus and their phase, which makes it possible to form a complex image Az. Alternatively, the complex image Az is obtained from the modulus or the phase of a complex image Az(Δλi) in one of the spectral bands Δλi. The main steps of this algorithm are shown in FIG. 2C.
Step 121: initialization, from the first image I1(Δλi) acquired by the image sensor 16 in each spectral band Δλi. This corresponds to step 100 described in the patent application FR1554811 previously mentioned. This initialization makes it possible to obtain an initial complex image A0(Δλi), representative of the exposure light wave 14 in the detection plane P0, in each spectral band Δλi.
Step 122: propagation of each first image I1(Δλi) to a reconstruction plane Pz, located at the reconstruction distance z from the detection plane P0. This corresponds to step 200 described in patent application FR1554811. A complex image Az(Δλi) is then obtained, representing the exposure wave 14 in the reconstruction plane Pz, in each spectral band Δλi. The reconstruction plane is preferably the plane of the sample P10.
Step 123: combination of the complex images Az(Δλi) so as to obtain a weighting function in the reconstruction plane Pz. This corresponds to step 300 described in patent application FR1554811. The weighting function can be a weighted sum of the complex images Az(Δλi).
Step 124: propagation of the weighting function to the detection plane P0, by applying the propagation operator h to the weighting function. This corresponds to step 400 described in patent application FR1554811.
Step 125: updating of the complex image A0(Δλi), representative of the exposure light wave 14 in the detection plane P0, in each spectral band Δλi. This update is carried out from the weighting function propagated to the detection plane P0 during the previous step. The phase of the complex image A0(Δλi), in each spectral band Δλi, is updated by being replaced by the phase of the weighting function propagated to the detection plane P0. This corresponds to step 500 described in patent application FR1554811.
Each complex image in the detection plane, thus updated, is then propagated to the reconstruction plane, according to step 122. Steps 122 to 125 are implemented iteratively until the phase of the complex image in the detection plane P0, or in the plane of the sample P10, is considered to be correctly estimated.
From the complex image Az(Δλi) obtained in the reconstruction plane Pz in each spectral band Δλi, a reconstructed image Iz can be obtained, for example:
by taking the average of the modulus or of the phase of the complex image Az(Δλi) over the spectral bands Δλi: the reconstructed image Iz is then a grayscale image;
by combining the modulus and/or the phase of the complex image Az(Δλi) in each spectral band Δλi, which makes it possible to obtain a reconstructed image Iz in color, representing the modulus or the phase.
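A schematic sketch of steps 121 to 125, reusing the propagate() helper from the previous sketch; the weighting function is shown here as a plain average of the spectral complex images, which is only one of the combinations allowed above, and the intensity-to-amplitude conversion by square root is an assumption.

```python
import numpy as np

def iterative_reconstruction(holograms, pitch, wavelengths, z, n_iter=10):
    """Schematic version of steps 121-125, alternating between P0 and Pz."""
    # step 121: initialise A0 with the measured amplitude and a zero phase
    A0 = [np.sqrt(h).astype(complex) for h in holograms]
    for _ in range(n_iter):
        # step 122: propagate each spectral component to the reconstruction plane Pz
        Az = [propagate(a, pitch, w, z) for a, w in zip(A0, wavelengths)]
        # step 123: weighting function in Pz, here a plain average over the bands
        weight = sum(Az) / len(Az)
        for i, (holo, w) in enumerate(zip(holograms, wavelengths)):
            # step 124: propagate the weighting function back to the detection plane P0
            back = propagate(weight, pitch, w, -z)
            # step 125: keep the measured modulus, adopt the estimated phase
            A0[i] = np.sqrt(holo) * np.exp(1j * np.angle(back))
    return Az  # complex images Az in the reconstruction plane, one per band
```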
Other algorithms can be used to obtain a complex image representing the exposure wave 14 along a reconstruction surface Pz facing the first image sensor 16. Such algorithms are for example described in patent application FR1652500 filed on March 23, 2016. Such algorithms can be implemented with a first light source 11 emitting a first light wave 12 in a single spectral band, without departing from the scope of the invention.
FIG. 2D represents an image reconstructed by implementing the algorithm described in connection with FIG. 2C. It represents a spatial distribution of the modulus for a sample composed of Jurkat cells floating in a PBS (Phosphate Buffered Saline) liquid buffer. An ink spot was made, so as to form a visual reference mark. It will be noted that this reconstructed image makes it possible to observe the sample with a large field of observation and an acceptable precision. It allows easy selection of a region of interest ROI in the sample. FIGS. 2E and 2F represent observations of each region of interest using a magnifying objective 25, according to the second observation modality.
The experimental conditions of this test are as follows: light source 11: CREE reference Xlamp MCE;
first image sensor: IDS MT9J003, monochrome, 3840 × 2748 pixels, each pixel measuring 1.67 μm per side, for a detection area of 6.4 × 4.6 mm (29.4 mm²); distance d between the sample 10 and the first image sensor 16: 1.5 mm; spatial filter: opening of 150 μm;
objective 25: Olympus, ×20 magnification;
second image sensor 26: Mightex SE-C050-U, a color sensor comprising 2560 × 1920 pixels.
According to one embodiment, the region of interest ROI selected from the first image I1 is the subject of a reconstruction, so as to obtain a reconstructed image representative of the light wave 14 in the region of interest only.
FIG. 2G is an image representing a tissue slide obtained from a mouse and stained with an HES stain. The image of FIG. 2G is a reconstructed image Iz representing the modulus of a complex amplitude reconstructed, according to the algorithm described in connection with FIG. 2C, in the plane of the tissue slide (plane of the sample). FIG. 2H represents a region of interest of this slide, identified on the reconstructed image Iz and represented by a black outline in FIG. 2G, observed using a ×20 magnification objective.
According to a variant, a reconstructed image Iz is obtained from a first so-called intermediate image I'1, obtained by applying a binning to the first image I1. Carrying out the reconstruction on the intermediate image I'1 makes it possible to reduce the computation time, and to obtain a reconstructed image Iz more quickly. From this reconstructed image, a region of interest ROI is selected. It is then possible to obtain a reconstructed image of interest Iz,ROI, limited to the region of interest previously selected. This reconstructed image of interest is obtained by applying the holographic reconstruction algorithms previously described or cited, not to the whole of the first image I1, but only to the part I1,ROI of the first image corresponding to the selected region of interest. FIGS. 3A, 3B and 3C correspond to such an embodiment. FIG. 3A represents a reconstructed image obtained by considering an intermediate image I'1 resulting from a 4 × 4 binning of the image I1 acquired by the first image sensor 16. A reconstructed image Iz is then obtained, of lower quality than that shown in FIG. 2D, but sufficient to allow the selection of regions of interest ROI. When the operator selects a region of interest ROI, the selected region of interest is reconstructed on the basis of only the part of the acquired image I1 delimited by the region of interest. This results in a more precise reconstructed image of interest Iz,ROI, the reduced dimension of the region of interest ROI authorizing the use of a reconstruction based on a part I1,ROI of the first image I1 of high spatial resolution. FIGS. 3B and 3C represent reconstructed images of interest corresponding to the regions of interest materialized in FIG. 3A.
FIG. 3D illustrates the main steps of this variant. Step 110 corresponds to the acquisition of the first image I1 by the first image sensor. Step 119 is the application of an operator, for example a binning, so as to obtain an intermediate image I'1 covering a field of observation similar to that of the first image, but comprising a number of pixels smaller than the number of pixels of the first image I1. Preferably, the intermediate image I'1 comprises at least two times fewer pixels than the first image I1, or even at least 4 times or 10 times fewer pixels than the first image I1.
Step 120 corresponds to the reconstruction of the complex expression of the exposure light wave 14, along a reconstruction plane Pz, not from the first image I1 but from the intermediate image I'1. A complex image Az is thus obtained, from which the reconstructed image Iz is extracted, in the reconstruction plane Pz, from the modulus and/or the phase of the complex image Az.
Step 130 is the selection of a region of interest ROI on the reconstructed image Iz.
Step 132 corresponds to the selection of the part I1,ROI of the first image corresponding to the region of interest ROI selected during the previous step.
Step 134 is a reconstruction of the complex expression of the exposure light wave 14, along a reconstruction plane Pz, from the part I1,ROI of the first image I1 corresponding to the region of interest ROI selected during step 130. This gives a reconstructed image of interest Iz,ROI in the reconstruction plane Pz, obtained from the modulus and/or the phase of a complex image, called the complex image of interest Az,ROI, reconstructed in the region of interest.
Steps 140 to 170, as previously described, can then be implemented, so as to obtain an image of the region of interest selected during step 130 through the objective 25, using the second image sensor 26. In certain cases, however, the reconstructed image of interest may be sufficient to obtain a correct representation of the sample 10.
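A minimal sketch of this coarse-then-fine flow (steps 119 to 134), reusing the propagate() helper sketched earlier; the 4 × 4 binning factor and the ROI coordinates are illustrative.

```python
import numpy as np

def bin_image(img, b=4):
    """Step 119: intermediate image I'1 by averaging non-overlapping b x b blocks."""
    ny, nx = (s // b * b for s in img.shape)
    return img[:ny, :nx].reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))

# step 120: fast coarse reconstruction on I'1 (binned pixels are b times larger)
# Iz = np.abs(propagate(bin_image(I1, 4), 4 * pitch, wavelength, z))
# steps 130-134: a region of interest picked on Iz is mapped back to
# full-resolution indices, and only that crop of I1 is reconstructed
# r0, r1, c0, c1 = 4 * y0, 4 * y1, 4 * x0, 4 * x1
# Iz_roi = np.abs(propagate(I1[r0:r1, c0:c1], pitch, wavelength, z))
```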
According to one embodiment, the region of interest ROI of the sample is not selected manually, using a mouse or keyboard type selector 41, but automatically, by implementing an image processing algorithm using a processor, for example the processor 40. The selection of the region of interest is carried out as a function of a previously determined selection criterion. This criterion is for example a morphological criterion, in which case a region of interest is automatically detected in case of correspondence with the morphological criterion. FIG. 4A shows a so-called reconstructed phase image Iz, representing the phase of the complex amplitude of the exposure light wave along a reconstruction plane, the latter coinciding with the plane of the sample. In this example, the sample consists of cells, some of which are dividing. The inventors have observed that cell division can be detected by an abrupt increase in phase. It is then possible to automatically detect, in the reconstructed image Iz, the pixels crossing a certain intensity threshold, and to define a region of interest around such pixels. A simple intensity thresholding of the phase image is sufficient to automatically locate the regions of interest. Each region of interest can then be successively observed in more detail, in particular using the second modality. In FIG. 4A, the regions of interest are materialized by a white outline.
FIG. 4B shows an image of a blood smear stained with Giemsa, in which white blood cells are sought. It is a reconstructed image Iz representing the phase of the complex amplitude of the exposure light wave 14, according to the plane of the sample. In such a situation, one searches for regions of interest containing a particle of interest, in this case a white blood cell. The regions of interest are identified according to a morphological analysis based on a size and gray-level criterion, the white blood cells appearing in the form of homogeneous and dark gray-level spots of a predetermined size. According to such a criterion of interest, the regions of interest can be automatically detected, the latter being materialized by a black outline.
FIG. 4C shows infected cells in a tissue sample. An immunohistochemical staining agent specific to the Epstein-Barr virus (EBV) was previously applied. It is a reconstructed image Iz representing the modulus of the complex amplitude of the exposure light wave, according to the plane of the sample. In such an image, the EBV-infected cells appear in the form of dark spots, which are easy to identify by thresholding. According to such a criterion of interest, the regions of interest can be automatically detected, the latter being materialized by a black outline.
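As an illustration of such automatic detection, a thresholding followed by a size criterion can be applied to the reconstructed phase or modulus image. The sketch below is a minimal assumption-laden example, relying on scipy; the threshold and size values are placeholders to be tuned to the sample.

```python
import numpy as np
from scipy import ndimage

def detect_rois(image, threshold, min_size, max_size, dark=False):
    """Detect regions of interest by thresholding a reconstructed image Iz.

    With dark=True, dark spots are sought (as for FIGS. 4B and 4C);
    otherwise pixels above the threshold are kept (abrupt phase increase,
    as for FIG. 4A). Connected regions whose pixel count lies outside
    [min_size, max_size] are rejected (morphological size criterion).
    """
    mask = image < threshold if dark else image > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return []
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    slices = ndimage.find_objects(labels)  # one (slice_y, slice_x) box per region
    return [slices[i] for i in range(n) if min_size <= sizes[i] <= max_size]
```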
Furthermore, as described in connection with the prior art, the first image I1, or the image Iz reconstructed from the first image, can be used to monitor the position, or tracking, of moving particles. Each tracked particle can be associated with a region of interest, which can then be analyzed periodically using the second modality.
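Purely as an illustration of such tracking — the text does not prescribe an association strategy — detections in successive reconstructed images can be linked by a nearest-neighbour rule; a minimal sketch with hypothetical names:

```python
import numpy as np

def link_detections(prev_pts, new_pts, max_dist):
    """Greedily match each previous particle to its nearest new detection.

    prev_pts, new_pts: (n, 2) and (m, 2) arrays of (y, x) positions.
    Returns (i, j) index pairs; a particle moving farther than max_dist
    between two frames is left unmatched.
    """
    if len(new_pts) == 0:
        return []
    links = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(new_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
    return links
```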
According to one embodiment, described in FIG. 5, the first image I1 is also used to determine the distance between the sample 10 and the first image sensor 16 (or the detection plane P0), at the level of the region of interest ROI. For this, a processor, for example the processor 40, implements a digital focusing algorithm. Such an algorithm is known to those skilled in the art. The main steps are described in connection with Figure 5.
Step 136: The first image I1, or the part of the first image corresponding to the region of interest ROI, is propagated by a digital propagation operator h, as previously described, according to several reconstruction planes Pz, each reconstruction plane extending at a different distance z from the detection plane P0. A complex expression of the light wave 14 is then obtained according to these different reconstruction planes Pz, thus forming as many complex images Az; a stack of complex images is thus obtained. From each complex image Az, a reconstruction image A'z is established, representing the modulus and/or the phase of the complex expression in each reconstruction plane Pz considered.
Step 137: A sharpness indicator qz is assigned to each reconstruction image A'z. The sharpness indicator qz can be an indicator quantifying a dispersion of each image A'z, for example a standard deviation or a variance. It can also be defined by convolving each reconstruction image with a Sobel operator. For example, one can define a Sobel operator Sx along the X axis and a Sobel operator Sy along the Y axis. If (x, y) denote the pixels of the reconstruction image A'z, the sharpness indicator qz associated with each reconstruction image A'z can then be such that:

qz = Σ(x,y) √[ (A'z ∗ Sx)²(x, y) + (A'z ∗ Sy)²(x, y) ]

where ∗ denotes a two-dimensional convolution. One can for example take:

Sx = [ -1 0 1 ; -2 0 2 ; -1 0 1 ] and Sy = [ 1 2 1 ; 0 0 0 ; -1 -2 -1 ].
Step 138: The evolution of the sharpness indicator qz as a function of the depth makes it possible to identify the distance zROI between the detection plane P0 and the sample, at the level of the region of interest ROI. Depending on the sharpness indicator selected, the distance zROI generally corresponds to a minimum or a maximum, along the Z axis, of the value of the sharpness indicator. The distance zROI is then used, during step 140, so as to automatically place the region of interest ROI of the sample in the focal plane of the objective 25. We then avoid, or limit, a manual focusing adjustment of the lens 25 before acquiring the second image I2.
The complex images Az forming the complex image stack can be obtained from a complex image, called the initial image, A(z=d), formed at a distance d established a priori between the first image sensor 16 and the sample. This initial complex image is obtained by implementing an iterative reconstruction algorithm, as previously described. It is then propagated in the different reconstruction planes Pz by a simple convolution with a propagation operator h. For example, a complex image Az1, in the reconstruction plane Pz1, is obtained simply by the operation: Az1 = A(z=d) ∗ h(d→z1), where h(d→z1) represents a propagation operator describing the propagation of the light between the plane P(z=d) and the plane P(z=z1). The distance between two adjacent propagation planes is determined as a function of the precision with which it is desired to determine the distance zROI between the detection plane P0 and the region of interest ROI of the sample.
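A hedged sketch of this digital focusing, reusing the propagate() function from the earlier sketch: the Sobel matrices follow the expression given above, while treating the best plane as a maximum of qz is an assumption to be adapted to the indicator retained.

```python
import numpy as np
from scipy import ndimage

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def sharpness(img):
    """qz = sum over (x, y) of sqrt((img * Sx)^2 + (img * Sy)^2)."""
    gx = ndimage.convolve(img, SX)
    gy = ndimage.convolve(img, SY)
    return float(np.sqrt(gx**2 + gy**2).sum())

def autofocus(a_initial, z_offsets, wavelength, pitch):
    """Scan a stack of reconstruction planes and return the best offset.

    a_initial is the initial complex image A(z=d); each plane of the stack
    is obtained by a further propagation, as described in the text.
    Whether the best plane is a maximum or a minimum of the indicator
    depends on the indicator retained; a maximum is assumed here.
    """
    scores = [sharpness(np.abs(propagate(a_initial, dz, wavelength, pitch)))
              for dz in z_offsets]
    return z_offsets[int(np.argmax(scores))]
```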
One of the advantages of the first modality lies in the extent of the first observed field, which is large compared with the field observed through a microscope objective. However, over such a field, the sample may not be strictly parallel to the image sensor 16, as shown in FIG. 6A. Consequently, during the application of the reconstruction algorithm of step 120, a reconstruction distance z can be optimal for one part of the sample, but not for another. This can lead to a degradation of the spatial resolution of the reconstructed image Iz. The inventors have indeed observed that it is optimal for the reconstruction plane Pz to be the plane P10 along which the sample 10 extends. Thus, if the reconstruction distance z does not correspond to the distance between the detection plane and the sample plane, the quality of the reconstruction is degraded.
In order to address this problem, it is proposed to implement step 120 by considering not a single propagation distance z, corresponding to the distance between the plane of the sample P10 and the detection plane P0, but by taking into account a possible variation of the distance between the sample and the detection plane. For this, prior to step 120, the image I1 acquired by the first image sensor 16 can be partitioned into different elementary parts I1,w, each elementary part being assigned a reconstruction distance zw, representative of the distance between the sample and the first image sensor 16 at the level of that elementary part. Step 120 is then implemented separately for each elementary part I1,w, so as to carry out a reconstruction of the exposure light wave 14 reaching the image sensor 16 at said elementary part, according to the reconstruction distance zw associated with the elementary part I1,w. In other words, to each elementary part I1,w is assigned a reconstruction distance zw corresponding to an estimate, in said elementary part, of the distance between the plane of the sample P10 and the detection plane P0.
However, the position of the sample relative to the first image sensor 16 is generally not known a priori. A calibration phase, comprising the steps shown in FIG. 6B, is therefore implemented, so as to estimate the evolution of the distance between the sample 10 and the image sensor 16. These steps are as follows:
Step 111: selection of a plurality of calibration points Un on the first acquired image I1. This involves determining at least two points, and preferably at least three points, on the first image I1 acquired by the first image sensor. These points are preferably as far apart from each other as possible. They can for example be four points situated at the corners of the first image I1, to which one or two points can optionally be added at the center of the image. The number of calibration points must be low enough for the calibration to be performed quickly; it can be between 2 and 10. In FIG. 6C, 4 calibration points are represented, located at the corners of the first image I1.
Step 112: determination of a calibration zone Vn around each calibration point. Typically, a calibration zone extends over at least 10 pixels around a calibration point Un. A calibration zone can thus comprise 10 × 10 pixels, or even 20 × 20 pixels, or more, for example 50 × 50 pixels.
Steps 113 to 115 aim to implement a digital focusing algorithm, as previously described, on each calibration zone Vn, so as to estimate a distance zn between the sample 10 and the detection plane P0 at the level of each calibration zone Vn.
Step 113: application of a propagation operator h to each elementary calibration zone Vn in order to obtain, for each of them, a so-called calibration complex image An,z of the exposure light wave 14n reaching the detection plane P0 at the level of said elementary calibration zone Vn, according to different reconstruction planes Pz respectively spaced at different distances z from the detection plane. The reconstruction planes Pz are, for example, spaced a few microns apart, for example 10 μm, over a range of distances capable of containing the sample 10. This gives a complex image An,z of the light wave 14n to which the first image sensor 16 is exposed, in the elementary calibration zone Vn, according to the different reconstruction planes Pz.
Step 114: for each elementary calibration zone Vn, association, with each reconstruction plane Pz, of a dispersion indicator qn,z of a so-called reconstruction image A'n,z, obtained from the phase and/or the modulus of the complex expression of the exposure light wave 14 given by the calibration complex image An,z reconstructed during step 113. The dispersion indicator qn,z can be a standard deviation of the modulus or of the phase of the complex expression reconstructed in each reconstruction plane.
Step 115: on each elementary calibration zone Vn, determination of a calibration distance zn, as a function of the different sharpness indicators qn,z. The calibration distance zn is the distance between the sample 10 and the detection plane P0 at the level of each elementary calibration zone Vn. Generally, this step amounts to applying a digital autofocus so as to determine the reconstruction plane Pz in which an image of the phase or of the modulus of the complex image An,z is the sharpest, this reconstruction plane then being considered to be located at the level of the sample 10. The sharpness indicator qn,z can be one of those described in connection with FIG. 5. The calibration distance zn generally corresponds to a particular value of the sharpness indicator, for example a minimum value or a maximum value depending on the indicator selected.
Step 116: partition of the first acquired image I1 into different elementary parts I1,w, and association, with each elementary part, of a distance zw separating the sample 10 from the detection plane P0. Each of these distances is determined as a function of the calibration distances zn established for each elementary calibration zone Vn, for example by two-dimensional interpolation; the interpolation can be linear. The number W of elementary parts I1,w can be determined a priori, or as a function of the difference between the different calibration distances. The more the sample is tilted, the higher the number of elementary parts can be.
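As a sketch of steps 111 to 116 — zone sizes, tile counts and the use of scipy's griddata are all assumptions — the few calibration distances zn can be interpolated into a per-tile distance map:

```python
import numpy as np
from scipy.interpolate import griddata

def distance_map(cal_points, cal_distances, shape, n_tiles):
    """Interpolate per-tile sample-to-sensor distances z_w.

    cal_points    : (n, 2) pixel coordinates (y, x) of the calibration points Un
    cal_distances : (n,) distances z_n found by digital focusing
    shape         : (height, width) of the first image I1
    n_tiles       : number of elementary parts per image side
    Returns an (n_tiles, n_tiles) array of distances z_w.
    """
    ys = (np.arange(n_tiles) + 0.5) * shape[0] / n_tiles  # tile centres
    xs = (np.arange(n_tiles) + 0.5) * shape[1] / n_tiles
    YX = np.array([(y, x) for y in ys for x in xs])
    zw = griddata(cal_points, cal_distances, YX, method="linear")
    nan = np.isnan(zw)  # points outside the convex hull of Un get NaN
    if nan.any():
        zw[nan] = griddata(cal_points, cal_distances, YX[nan], method="nearest")
    return zw.reshape(n_tiles, n_tiles)
```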
Step 120 then comprises a reconstruction of the complex amplitude of the exposure light wave 14 by considering each elementary part I1,w of the first image I1 independently. Thus, each elementary part of the first image is propagated according to the distance zw allocated to it during step 116. As many reconstructions are then carried out as there are elementary parts I1,w, these reconstructions being carried out simultaneously or successively, but independently of each other. Complex images Az,w of the exposure light wave 14 are then obtained according to different elementary reconstruction planes Pz,w, each elementary reconstruction plane extending at the detection plane–sample distance zw determined at the level of the elementary part I1,w.
We can then form a reconstructed image Iz,w, called an elementary reconstructed image, from the modulus or the phase of the complex expression thus calculated, according to the different elementary reconstruction planes Pz,w. FIG. 6D represents a section showing different elementary reconstruction planes Pz,w adjacent to one another. The elementary reconstructed images Iz,w can be concatenated to form a reconstructed image Iz.
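A sketch of the resulting tile-wise reconstruction, reusing propagate() and the distance map above; abutting the tiles directly, as here, is the simplest form of the concatenation described, and the square root of the hologram again stands in for the iteratively reconstructed field:

```python
import numpy as np

def reconstruct_tiled(hologram, zw_map, wavelength, pitch):
    """Reconstruct each elementary part at its own distance, then concatenate."""
    n_tiles = zw_map.shape[0]
    h, w = hologram.shape
    th, tw = h // n_tiles, w // n_tiles
    out = np.zeros((th * n_tiles, tw * n_tiles))
    for i in range(n_tiles):
        for j in range(n_tiles):
            tile = hologram[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            # Back-propagate the tile at its own detection plane - sample distance.
            a = propagate(np.sqrt(tile), -zw_map[i, j], wavelength, pitch)
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = np.abs(a)
    return out
```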
This embodiment can be used for a simple observation of the sample, thanks to the reconstructed image Iz. Furthermore, when a region of interest is selected and then observed by the second image sensor, through the lens 25, the distances zw can be used to automatically position the selected region of interest in the focal plane of the objective 25. This takes advantage of an accurate knowledge of the position of the sample relative to the detection plane.
FIG. 7A represents a reconstructed image Iz of a slide of human tissue. Different regions of interest are marked in the image of FIG. 7A. FIGS. 7B, 7C and 7D represent these regions of interest reconstructed by considering a single reconstruction distance. FIGS. 7E, 7F and 7G respectively show the same regions of interest, reconstructed by implementing the embodiment shown in FIG. 6B: each region of interest was obtained by taking into account the inclination of the sample relative to the detection plane. It can be seen that this improves the spatial resolution, in particular by comparing the images 7B and 7E, or 7C and 7F.
FIG. 8 shows an example of the display that can be obtained on the screen 44. On the left part (1) of the screen, a reconstructed image Iz of a sample formed by a slide of biological tissue is represented, considering the modulus of a complex expression reconstructed in the plane of the sample. On the right part (2) of the screen, an image, obtained with a microscope objective, of a region of interest ROI selected from the reconstructed image is displayed.
The invention can be applied in the health field, or in other fields in which it is necessary to have an overview of a sample while allowing an easy analysis of regions of interest selected within it. Thus, the invention will find applications in the field of environmental control, the agri-food industry, biotechnology, or the monitoring of industrial processes.
Claims (20)
[1" id="c-fr-0001]
1. Device (1) for observing a sample (10) comprising:
a support (10s), intended to hold the sample;
a first light source (11), capable of emitting an incident light wave (12) propagating to the sample;
a first image sensor (16), capable of acquiring a first image (I1) of the sample illuminated by the incident light wave (12), the support (10s) being configured to hold the sample between the first light source (11) and the first image sensor (16), so that no magnification optics are arranged between the sample and the first image sensor, the first image sensor being exposed to a so-called exposure light wave (14), the first image (I1) defining a first field of observation (Ω1) of the sample;
the device also comprising:
a second image sensor (26), optically coupled to an optical system (25) having a magnification greater than 1, so as to acquire a second image (I2) of the sample, held on the support (10s), according to a second field of observation (Ω2) that is reduced compared with the first field of observation (Ω1).
[2" id="c-fr-0002]
2. Device according to claim 1, comprising a second light source (21), capable of illuminating the sample (10) during the acquisition of the second image of the sample.
[3" id="c-fr-0003]
3. Device according to any one of the preceding claims, comprising a mechanism (30) for moving the sample (10) relative to the first image sensor (16) and to the optical system (25), so as to alternate between:
a first mode, according to which the sample is placed in the field of observation (Ω1) of the first image sensor (16), so as to acquire the first image (I1);
a second mode, according to which the sample is placed in the field of observation (Ω2) of the second image sensor (26), so as to acquire the second image (I2).
[4" id="c-fr-0004]
4. Device according to claim 3, comprising:
a selector (41), capable of allowing the selection of a region of interest (ROI) in the first image (I1);
a processor (40), configured to determine a relative position of the sample (10) with respect to the optical system (25), according to which the selected region of interest (ROI) extends in the second field of observation (Ω2);
such that the movement mechanism (30) is configured to automatically position the sample (10) relative to the optical system (25) according to said relative position determined by the processor.
[5" id="c-fr-0005]
5. Device according to claim 4, comprising a processor (40) configured to apply a digital propagation operator (h) to the first image (I1), so as to:
calculate a complex expression of the exposure light wave (14) according to a reconstruction surface (Pz) extending opposite the first image sensor (16), and form a so-called reconstructed image (Iz) from the modulus and/or the phase of said complex expression;
such that, in the second mode, the position of the sample relative to the optical system (25) is defined as a function of a region of interest (ROI) selected from the reconstructed image (Iz).
[6" id="c-fr-0006]
6. Device according to claim 4 or 5, in which, the first image sensor (16) extending along a detection plane (P0), the device comprises a processor (40) configured to apply a digital focusing from the first image (I1), so as to estimate a distance (zROI) between the sample (10) and the detection plane (P0) at the level of the region of interest (ROI), so that the relative position of the sample (10) with respect to the optical system (25) is determined as a function of the distance thus estimated.
[7" id="c-fr-0007]
7. Device according to any one of claims 3 to 6, in which, the first image sensor (16) and the second image sensor (26) being fixed, the movement mechanism (30) is adapted to move the sample (10):
facing the first image sensor (16) in the first mode; and facing the optical system (25) in the second mode.
[8" id="c-fr-0008]
8. Device according to any one of claims 3 to 6, in which, the sample being fixed, the movement mechanism is capable of:
moving the first image sensor (16) to bring it facing the sample (10), in the first mode;
and/or moving the optical system (25) to bring it facing the sample (10), in the second mode.
[9" id="c-fr-0009]
9. Method for observing a sample (10) comprising the following steps:
a) illumination of the sample using a first light source (11);
b) acquisition of an image of the sample, called the first image (I1), using a first image sensor (16), the first image sensor being exposed to a so-called exposure light wave (14), the sample being held between the first light source and the first image sensor, no magnification optics being arranged between the first image sensor and the sample;
c) selection of a region of interest (ROI) of the sample from the first image (I1);
d) relative displacement of the sample with respect to an optical system (25) having a magnification greater than 1, the optical system being optically coupled to a second image sensor (26), the displacement being effected automatically by a displacement mechanism (30), so that the region of interest (ROI) of the sample selected during step c) is located in a field of observation (Ω2) of the second image sensor (26);
e) illumination of the sample (10) using a second light source (21) and acquisition of an image of the region of interest (ROI) of the sample, called the second image (I2), using the second image sensor (26).
[10" id="c-fr-0010]
10. The method of claim 9, wherein the relative displacement of the sample makes it possible to automatically switch between:
a first mode, according to which the sample is placed in a field of observation (Ω1) of the first image sensor (16), so as to acquire the first image (I1);
a second mode, according to which the sample is placed in the field of observation (Ω2) of the second image sensor (26), so as to acquire the second image (I2).
[11" id="c-fr-0011]
11. Method according to claim 9 or 10, in which, during step c), the region of interest (ROI) is selected on the first image (I1), using a manual selector (41) or by an analysis of the first image (I1), the analysis being based on a previously defined selection criterion and implemented by a processor (40).
[12" id="c-fr-0012]
12. Method according to claim 9 or 10, in which, the first image sensor extending along a detection plane (P0), step c) comprises the following sub-steps:
ci) application of a propagation operator (h) to the first image (I1), so as to calculate a complex expression of the exposure light wave (14) according to a reconstruction surface (Pz) extending opposite the detection plane (P0);
cii) formation of a so-called reconstructed image (Iz), as a function of the modulus and/or the phase of the complex expression calculated during sub-step ci);
ciii) selection of the region of interest (ROI) from the reconstructed image (Iz).
[13" id="c-fr-0013]
13. The method of claim 12, in which, during sub-step ci), the propagation operator is applied to a so-called intermediate image (I1'), obtained from the first image (I1) and comprising fewer pixels than the first image (I1), step c) also comprising the following sub-steps:
civ) application of a propagation operator (h) to the first image, in the region of interest (ROI) selected during sub-step ciii), so as to calculate a complex expression of the exposure light wave according to a reconstruction surface extending opposite the detection plane, defining a complex image of interest (Az,ROI);
cv) from the complex image of interest (Az,ROI) thus calculated, formation of a so-called reconstructed image of interest (Iz,ROI), as a function of the modulus and/or the phase of the complex expression;
cvi) display of the reconstructed image of interest (Iz,ROI).
[14" id="c-fr-0014]
14. Method according to claim 12 or 13, in which, during sub-step ciii), the region of interest (ROI) is selected on the reconstructed image (Iz), using a manual selector (41) or by an analysis of the reconstructed image (Iz), the analysis being based on a previously defined selection criterion and implemented by a processor (40).
[15" id="c-fr-0015]
15. Method according to any one of claims 12 to 14, in which, during sub-step ci), the reconstruction surface is a plane, called the plane of the sample (P10), along which the sample (10) extends.
[16" id="c-fr-0016]
16. Method according to any one of claims 12 to 15, comprising, prior to step c), a step of calibrating a position of the sample (10) relative to the detection plane (P0), the calibration step including the following sub-steps:
i) selection of a plurality of calibration points (Un) on the first acquired image (I1);
ii) determination of an elementary calibration zone (Vn) around each selected calibration point (Un);
iii) implementation, by a processor (40), of a digital focusing algorithm, so as to estimate a distance, called the calibration distance (zn), between the sample (10) and the detection plane (P0), at the level of each elementary calibration zone (Vn);
iv) partition of the first acquired image (I1) into different elementary images (I1,w) and association, with each elementary image, of a distance (zw) between the sample and the detection plane (P0), as a function of the calibration distance (zn) estimated for each elementary calibration zone (Vn);
so that sub-step ci) comprises an application of a propagation operator to each elementary image (I1,w), according to the distance (zw) associated with said elementary image, so as to calculate, for each elementary image, a complex expression of the exposure light wave (14) according to an elementary reconstruction plane (Pz,w); step cii) then includes the formation of an elementary reconstructed image (Iz,w) from the modulus or the phase of the complex expression calculated during sub-step ci), according to each elementary reconstruction plane (Pz,w), the reconstructed image (Iz) being obtained by concatenation of each elementary reconstructed image (Iz,w).
[17" id="c-fr-0017]
17. The method as claimed in claim 16, in which the digital focusing algorithm comprises:
an application of a digital propagation operator (h) to each elementary calibration zone (Vn) in order to obtain, for each of them, a complex image, called a calibration image (An,z), of the exposure light wave (14) according to different reconstruction planes (Pz) respectively spaced at different distances (z) from the detection plane (P0);
for each elementary calibration zone (Vn), a determination, for each reconstruction plane (Pz), of a sharpness indicator (qn,z) of a reconstruction image (A'n,z) obtained from the phase and/or the modulus of the calibration complex expression calculated in said reconstruction plane (Pz);
a determination of a calibration distance (zn) between the sample and the detection plane at each elementary calibration zone (Vn), as a function of the calculated sharpness indicators (qn,z).
[18" id="c-fr-0018]
18. Method according to any one of claims 9 to 17, in which step d) comprises the following sub-steps:
di) implementation, by a processor, of a digital focusing algorithm, so as to estimate a distance (zROI) between the sample (10) and a detection plane (P0) along which the first image sensor (16) extends, at the level of the region of interest (ROI) selected during step c);
dii) relative movement of the sample (10) with respect to the optical system (25), taking into account the distance thus estimated, so that the sample is arranged in a focal plane of the optical system (25).
[19" id="c-fr-0019]
19. The method of claim 18, wherein the digital focusing algorithm comprises:
the application of a digital propagation operator (h) to the first image, so as to calculate a complex expression of the exposure light wave according to a plurality of reconstruction planes (Pz) respectively located at different reconstruction distances (z) from the detection plane (P0);
obtaining a reconstruction image (A'z) at each reconstruction distance, from the phase or the amplitude of the complex expression determined according to each reconstruction plane (Pz);
determination of a sharpness indicator (qz) for each reconstruction image;
determination of the distance (zROI) between the sample and the detection plane (P0), at the level of the region of interest (ROI), according to the sharpness indicator (qz) determined on each reconstruction image.